Data assimilation

A numerical model determines how a model state at a particular time evolves into the model state at a later time. Even if the numerical model were a perfect representation of the actual system (which can rarely, if ever, be the case), a perfect forecast of the future state of that system would also require the initial state of the numerical model to be a perfect representation of the actual state of the system.
Data assimilation, or (more or less synonymously) data analysis, is the process by which observations of the actual system are incorporated into the model state of a numerical model of that system. Applications of data assimilation arise in many fields of the geosciences, perhaps most importantly in weather forecasting and hydrology.
A frequently encountered problem is that the number of observations of the actual system available for analysis is orders of magnitude smaller than the number of values required to specify the model state. The initial state of the numerical model cannot therefore be determined from the available observations alone. Instead, the numerical model is used to propagate information from past observations to the current time. This is then combined with current observations of the actual system using a data assimilation method.
Most commonly, this leads to the numerical modelling system alternately performing a numerical forecast and a data analysis; this is known as ''analysis/forecast cycling''. The forecast from the previous analysis time to the current one is frequently called the ''background''.
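A minimal sketch of one such cycle, assuming purely illustrative forecast and analyse functions (neither comes from any particular assimilation library, and the dynamics and error levels are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast(state):
    """Hypothetical model: propagate the state one cycle forward."""
    return 0.9 * state + 1.0  # placeholder linear dynamics

def analyse(background, observation, w=0.5):
    """Hypothetical analysis: weighted mean of background and observation."""
    return (1.0 - w) * background + w * observation

state = np.array([10.0])  # initial model state
for cycle in range(4):
    background = forecast(state)                              # forecast step
    observation = background + rng.normal(scale=0.5, size=1)  # synthetic observation
    state = analyse(background, observation)                  # analysis starts the next cycle
    print(f"cycle {cycle}: analysis = {state[0]:.3f}")
```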
The analysis combines the information in the background with that in the current observations, essentially by taking a weighted mean of the two, using estimates of the uncertainty of each to determine the weighting factors. The data assimilation procedure is invariably multivariate and includes approximate relationships between the variables. The observations are of the actual system, rather than of the model's incomplete representation of that system, and so the relationships between the variables may differ from those in the model.
To reduce the impact of these problems, incremental analyses are often performed: the analysis procedure determines increments which, when added to the background, yield the analysis. Because the increments are generally small compared to the background values, this leaves the analysis less affected by 'balance' errors in the analysed increments. Even so, some filtering, known as initialisation, may be required to avoid problems, such as the excitation of unphysical wave-like activity or even numerical instability, when running the numerical model from the analysed initial state.
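For a single scalar variable, this weighted mean can be written out explicitly. The following is a minimal sketch in incremental form, where sigma_b and sigma_o are assumed background and observation error standard deviations (all numbers are illustrative only):

```python
def scalar_analysis(background, observation, sigma_b, sigma_o):
    """Weighted mean of background and observation, written in incremental
    form: the analysis is the background plus an increment proportional to
    the observation-minus-background difference."""
    weight = sigma_b**2 / (sigma_b**2 + sigma_o**2)  # inverse-variance weighting
    increment = weight * (observation - background)
    return background + increment

# Example: an uncertain background (sigma_b=2) is pulled towards a more
# accurate observation (sigma_o=1); the result (about 12.6) lies closer
# to the observation than to the background.
print(scalar_analysis(background=15.0, observation=12.0, sigma_b=2.0, sigma_o=1.0))
```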
As an alternative to ''analysis/forecast cycles'', data assimilation can proceed by some sort of continuous process such as ''nudging'', where the model equations themselves are modified to add terms that continuously ''push'' the model towards the observations.
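A minimal sketch of nudging, assuming a toy scalar model with tendency f(x) = -0.2x and a relaxation coefficient k (both purely illustrative, not drawn from any specific system):

```python
def step_with_nudging(x, x_obs, dt=0.1, k=0.5):
    """One forward-Euler step of dx/dt = f(x) + k * (x_obs - x):
    the model tendency f(x) plus a term pushing x towards x_obs."""
    f = -0.2 * x             # illustrative model tendency
    nudge = k * (x_obs - x)  # relaxation towards the observed value
    return x + dt * (f + nudge)

x = 5.0
for _ in range(50):
    x = step_with_nudging(x, x_obs=2.0)
print(round(x, 3))  # settles towards a balance between model and observation
```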
==Data assimilation as statistical estimation==
In data assimilation applications, the analysis and forecasts are best thought of as probability distributions. The analysis step is an application of Bayes' theorem, and the overall assimilation procedure is an example of recursive Bayesian estimation. However, the probabilistic analysis is usually simplified to a computationally feasible form. In the general case, advancing the probability distribution in time exactly would be done by the Fokker-Planck equation, but that is unrealistically expensive, so various approximations operating on simplified representations of the probability distributions are used instead. If the probability distributions are normal, they can be represented by their mean and covariance, which gives rise to the Kalman filter. However, it is not feasible to maintain the full covariance because of the large number of degrees of freedom in the state, so various approximations are used instead.
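Under the normality assumption, a single analysis step takes the standard Kalman filter form. A minimal sketch with small illustrative matrices (in realistic systems the state dimension makes storing the full covariance infeasible, as noted above):

```python
import numpy as np

def kalman_analysis(x_b, P_b, y, H, R):
    """Kalman filter analysis step: update mean x_b and covariance P_b
    using observations y, observation operator H and observation-error
    covariance R."""
    S = H @ P_b @ H.T + R                   # innovation covariance
    K = P_b @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_a = x_b + K @ (y - H @ x_b)           # analysis mean
    P_a = (np.eye(len(x_b)) - K @ H) @ P_b  # analysis covariance
    return x_a, P_a

x_b = np.array([1.0, 0.0])   # background mean
P_b = np.diag([1.0, 2.0])    # background covariance
H = np.array([[1.0, 0.0]])   # observe the first variable only
R = np.array([[0.5]])        # observation error variance
y = np.array([2.0])
x_a, P_a = kalman_analysis(x_b, P_b, y, H, R)
```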
Many methods represent the probability distributions only by the mean and impute some pre-calculated covariance instead. In the basic form, such an analysis step is known as optimal statistical interpolation. Adjusting the initial value of the mathematical model, instead of changing the state directly at the analysis time, is the essence of the variational methods, 3DVAR and 4DVAR. Nudging, also known as Newtonian relaxation or 4DDA, is essentially the same idea proceeding in continuous time rather than in discrete analysis cycles (the Kalman-Bucy filter), again with an imputed, simplified covariance.
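The variational methods make this explicit by minimising a cost function. In the standard 3DVAR formulation (using conventional notation not defined in the text above: x_b is the background state, B and R the background- and observation-error covariances, H the observation operator and y the observations), the cost function is:

```latex
J(\mathbf{x}) = \frac{1}{2}(\mathbf{x}-\mathbf{x}_b)^{\mathsf T}\mathbf{B}^{-1}(\mathbf{x}-\mathbf{x}_b)
              + \frac{1}{2}\,(\mathbf{y}-H(\mathbf{x}))^{\mathsf T}\mathbf{R}^{-1}(\mathbf{y}-H(\mathbf{x}))
```

The analysis is the state x that minimises J; 4DVAR generalises this by comparing a model trajectory with observations distributed over a time window.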
Ensemble Kalman filters represent the probability distribution by an ensemble of simulations, and the covariance is approximated by the sample covariance.
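A minimal sketch of the sample-covariance estimate from such an ensemble (this shows only the covariance step, not a full ensemble Kalman filter update; the ensemble here is synthetic):

```python
import numpy as np

def ensemble_covariance(E):
    """Sample covariance of an ensemble E with shape (n_state, n_members)."""
    A = E - E.mean(axis=1, keepdims=True)  # anomalies about the ensemble mean
    return A @ A.T / (E.shape[1] - 1)      # unbiased sample covariance

rng = np.random.default_rng(0)
E = rng.normal(size=(3, 20))  # 20-member ensemble of a 3-variable state
P = ensemble_covariance(E)    # approximates the background-error covariance
```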

Excerpt source: Wikipedia, the free encyclopedia.
Read the full "Data assimilation" article on Wikipedia.